
Viewers don't trust candidates who use generative AI in political ads, study finds

Engadget

Artificial intelligence is expected to have an impact on the upcoming US election in November. States have been trying to protect against misinformation by passing laws that require political advertisements to disclose when they have used generative AI. Twenty states now have such rules on the books, and according to new research, voters react negatively to seeing those disclaimers. That seems like a fair response: if a politician uses generative AI to mislead voters, voters don't appreciate it. The study was conducted by New York University's Center on Technology Policy and first reported by The Washington Post.


This Political Startup Wants to Help Progressives Win … With AI-Generated Ads

WIRED

Stories about AI-generated political content are like stories about people drunkenly setting off fireworks: There's a good chance they'll end in disaster. WIRED is tracking AI usage in political campaigns across the world, and so far examples include pornographic deepfakes and misinformation-spewing chatbots. It's gotten to the point where the US Federal Communications Commission has proposed mandatory disclosures for AI use in television and radio ads. Despite concerns, some US political campaigns are embracing generative AI tools. There's a growing category of AI-generated political content flying under the radar this election cycle, developed by startups including Denver-based BattlegroundAI, which uses generative AI to come up with digital advertising copy at a rapid clip.


'Disinformation on steroids': is the US prepared for AI's influence on the election?

The Guardian

The AI election is here. Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The "deepfake" calls were linked to two Texas companies, Life Corporation and Lingo Telecom. It's not clear if the deepfake calls actually prevented voters from turning out, but that doesn't really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that's been pushing for federal and state regulation of AI's use in politics. "I don't think we need to wait to see how many people got deceived to understand that that was the point," Gilbert said.


Worried About Deepfakes? Don't Forget "Cheapfakes"

WIRED

Over the summer, a political action committee (PAC) supporting Florida governor and presidential hopeful Ron DeSantis uploaded a video of former president Donald Trump on YouTube in which he appeared to attack Iowa governor Kim Reynolds. It wasn't exactly real--though the text was taken from one of Trump's tweets, the voice used in the ad was AI-generated. The video was subsequently removed, but it has spurred questions about the role generative AI will play in the 2024 elections in the US and around the world. While platforms and politicians are focusing on deepfakes--AI-generated content that might depict a real person saying something they didn't or an entirely fake person--experts told WIRED there's a lot more at stake. Long before generative AI became widely available, people were making "cheapfakes" or "shallowfakes."


Michigan to pass law demanding transparency in AI-generated political ads

FOX News

Michigan is joining an effort to curb deceptive uses of artificial intelligence and manipulated media through state-level policies as Congress and the Federal Elections Commission continue to debate more sweeping regulations ahead of the 2024 elections. Campaigns on the state and federal level will be required to clearly say which political advertisements airing in Michigan were created using artificial intelligence under legislation expected to be signed in the coming days by Gov. Gretchen Whitmer, a Democrat. It also would prohibit use of AI-generated deepfakes within 90 days of an election without a separate disclosure identifying the media as manipulated.


Tech Companies Are Taking Action on AI Election Misinformation. Will it Matter?

TIME - Tech

The announcement comes a day after Microsoft said it was also taking a number of steps to protect elections, including offering tools to watermark AI-generated content and deploying a "Campaign Success Team" to advise political campaigns on AI, cybersecurity, and other related issues. Next year will be the most significant year for elections so far this century, with the U.S., India, the U.K., Mexico, Indonesia, and Taiwan all headed to the polls. Although many are concerned about the impact deepfakes and misinformation could have on elections, experts stress that the evidence for their effects so far is limited at best. Experts welcome the measures taken by tech companies to defend election integrity but say more fundamental changes to political systems will be required to tackle misinformation. Tech companies have come under scrutiny for the role they played in previous elections.


Meta will require campaigns to disclose use of AI in political ads

Washington Post - Technology News

The Meta announcement cited specific uses of AI that advertisers will have to disclose. They include ads showing a real person saying or doing something they didn't say or do; depicting a realistic-looking individual who doesn't exist or a realistic-looking event that didn't happen; or altering footage of a real event. Disclosure is also required for ads that show a "realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event."


'Feel-good measure': Google to require visible disclosure in political ads using AI for images and audio

FOX News

Haywood Talcove, CEO of LexisNexis Risk Solutions Government Group, tells Fox News Digital that criminal groups, mostly in other countries, are advertising on social media to market their AI capabilities for fraud and other crimes. Google is set to require that political advertising using artificial intelligence to generate images or sounds come with a visible disclosure for users. "AI-generated content should absolutely be disclosed in political advertisements. Not doing so leaves the American people open to misleading and predatory campaign ads," Ziven Havens, the policy director at the Bull Moose Project, told Fox News Digital. "In the absence of government action, we support the creation of new rulemaking to handle the new frontier of technology before it becomes a major problem." Havens' comments come after Google revealed last week that it will start requiring the disclosure of the use of AI to alter images in political ads starting in November, a little more than a year before the 2024 election, according to a PBS report.


TechScape: As the US election campaign heats up, so could the market for misinformation

The Guardian

X, the platform formerly known as Twitter, announced it will allow political advertising back on the platform – reversing a global ban on political ads in place since 2019. The move is the latest to stoke concerns about the ability of big tech to police online misinformation ahead of the 2024 elections – and X is not the only platform being scrutinised. Social media firms' handling of misinformation and divisive speech reached a breaking point in the 2020 US presidential election, when Donald Trump used online platforms to rile up his base, culminating in the storming of the Capitol building on 6 January 2021. But in the time since, companies have not strengthened their policies to prevent such crises; instead, they have slowly stripped protections away. This erosion of safeguards, coupled with the rise of artificial intelligence, could create a perfect storm for 2024, experts warn.


Is Congress Moving Too Slowly on A.I.?

Slate

At a White House summit on July 21, the Biden administration brought together the heads of seven different A.I. companies. A lot of the big names were there--Meta, Google, OpenAI--and they all signed "voluntary commitments" to safeguard artificial intelligence. In the Senate, Chuck Schumer is proposing a framework that legislators can use to tackle A.I. issues. But while the A.I. industry is moving at a breakneck pace, Washington is, as usual, slow to regulate. On Friday's episode of What Next: TBD, I spoke with Makena Kelly, who covers politics and policy for the Verge, about whether Washington can keep up with A.I. Our conversation has been edited and condensed for clarity.